International Journal of Artificial Intelligence and Machine Learning
Volume 2, Issue 2, July 2022
Research Paper | Open Access
APTx: Better Activation Function than MISH, SWISH, and ReLU's Variants used in Deep Learning
Ravin Kumar¹*
|
¹Department of Computer Science, Meerut Institute of Engineering and Technology, Meerut-250005, Uttar Pradesh, India. E-mail: ravin.kumar.cs.2013@miet.ac.in
*Corresponding Author
Int. Artif. Intell. & Mach. Learn. 2(2) (2022) 56-61, DOI: https://doi.org/10.51483/IJAIML.2.2.2022.56-61
Received: 12/03/2022 | Accepted: 21/06/2022 | Published: 05/07/2022
Abstract: Activation functions introduce non-linearity into deep neural networks. This non-linearity helps the network learn faster and more efficiently from the dataset. In deep learning, many activation functions have been developed and are chosen based on the type of problem statement. ReLU's variants, SWISH, and MISH are go-to activation functions. The MISH function is considered to perform similarly to, or even better than, SWISH, and much better than ReLU. In this paper, we propose an activation function named APTx that behaves similarly to MISH but requires fewer mathematical operations to compute. The lower computational requirement of APTx speeds up model training and thereby also reduces the hardware requirement for the deep learning model.
Keywords: Activation functions, ReLU, Leaky ReLU, ELU, SWISH, MISH, Neural networks
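The abstract contrasts APTx with MISH and SWISH on computational cost. As a rough illustration only, the NumPy sketch below compares the three functions element-wise; it assumes the functional form proposed in the paper, APTx(x) = (α + tanh(βx)) · γx with defaults α = 1, β = 1, γ = 1/2, which is not stated on this abstract page itself.

```python
import numpy as np

def mish(x):
    # MISH(x) = x * tanh(softplus(x)): needs exp, log, tanh and a multiply.
    return x * np.tanh(np.log1p(np.exp(x)))

def swish(x, beta=1.0):
    # SWISH(x) = x * sigmoid(beta * x): needs exp, a division and multiplies.
    return x / (1.0 + np.exp(-beta * x))

def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
    # Assumed APTx form: (alpha + tanh(beta * x)) * gamma * x.
    # Only one tanh, one add and two multiplies per element, i.e. fewer
    # elementary operations than MISH's exp + log + tanh chain.
    return (alpha + np.tanh(beta * x)) * gamma * x

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 5)
    print("x    :", x)
    print("MISH :", np.round(mish(x), 4))
    print("SWISH:", np.round(swish(x), 4))
    print("APTx :", np.round(aptx(x), 4))
```

With these assumed default parameters, (1 + tanh(x))/2 equals sigmoid(2x), so APTx reduces to x · sigmoid(2x): a curve of similar shape to SWISH and MISH that avoids the softplus (log/exp) term entirely.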